Recently, graph neural networks (GNNs) have significantly improved the performance of machine learning tasks on graphs. However, this technological breakthrough makes people wonder: how does a GNN make such decisions, and can we trust its predictions with high confidence? When it comes to critical domains such as biomedicine, where making a wrong decision can have severe consequences, it is crucial to interpret the inner working mechanisms of GNNs before applying them. In this paper, we propose GNNInterpreter, a novel model-level explanation method for different GNNs that follow the message-passing scheme, to explain the high-level decision-making process of a GNN model. More specifically, by means of a continuous relaxation of graphs and the reparameterization trick, GNNInterpreter learns a probabilistic generative graph distribution that produces the most representative graph for a target prediction in the eyes of the GNN model. Compared with the only existing work of this kind, GNNInterpreter is more efficient and more flexible in generating explanation graphs with different types of node features and edge features, without introducing another black box to explain the GNN and without requiring domain-specific knowledge. In addition, experimental studies conducted on four different datasets demonstrate that the explanation graphs generated by GNNInterpreter can match the desired graph pattern when the model is ideal, and reveal potential model pitfalls if there are any.
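The core mechanism described above, a continuous relaxation of discrete edge sampling combined with the reparameterization trick, can be illustrated with a minimal sketch. The "GNN" below is a hand-made stand-in class score that rewards a triangle pattern; the learning loop, learning rate, and target pattern are illustrative assumptions, not the paper's implementation.

```python
import math
import random

random.seed(0)
n = 4
# The toy "class pattern": a directed triangle 0 -> 1 -> 2 -> 0.
target = [[0] * n for _ in range(n)]
target[0][1] = target[1][2] = target[2][0] = 1

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def class_score_grad(i, j, a):
    # Stand-in for d(class logit)/d(edge): the toy score is
    # sum(target * A) - 0.1 * sum(A), which is linear in the soft adjacency A.
    return target[i][j] - 0.1

tau, lr = 0.5, 1.0
theta = [[0.0] * n for _ in range(n)]   # edge logits of the generative distribution
for _ in range(500):
    for i in range(n):
        for j in range(n):
            # Reparameterized binary-Concrete edge sample: sigmoid((theta + L)/tau)
            # with L ~ Logistic(0, 1), so the sample is differentiable in theta.
            u = random.uniform(1e-6, 1.0 - 1e-6)
            logistic = math.log(u) - math.log(1.0 - u)
            a = sigmoid((theta[i][j] + logistic) / tau)
            # Chain rule: d score / d a  *  d a / d theta.
            theta[i][j] += lr * class_score_grad(i, j, a) * a * (1.0 - a) / tau

# The most representative graph: threshold the learned edge probabilities.
explanation = [[1 if sigmoid(theta[i][j]) > 0.5 else 0 for j in range(n)]
               for i in range(n)]
```

After optimization, the learned edge probabilities concentrate on the pattern the toy class score rewards, so thresholding them recovers the triangle.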
Deep-learning-based latent representations have been widely used in numerous scientific visualization applications, such as isosurface similarity analysis, volume rendering, flow field synthesis, and data reduction, to name a few. However, existing latent representations are mostly generated from raw data in an unsupervised manner, which makes it difficult to incorporate domain interests to control the size of the latent representation and the quality of the reconstructed data. In this paper, we present a novel importance-driven latent representation to facilitate domain-interest-guided scientific data visualization and analysis. We utilize spatial importance maps to represent various scientific interests and take them as input to a feature transformation network to guide latent generation. We further reduce the latent size with a lossless entropy-encoding algorithm trained together with the autoencoder, improving storage and memory efficiency. We qualitatively and quantitatively evaluate the effectiveness and efficiency of the latent representations generated by our method with data from multiple scientific visualization applications.
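One simple way to incorporate domain interests into latent generation, sketched here as an assumption rather than the paper's actual network, is to weight the autoencoder's reconstruction loss by the spatial importance map, so that high-interest regions must be reconstructed more faithfully:

```python
def importance_weighted_mse(data, recon, importance):
    """Squared reconstruction error, weighted per voxel by domain importance."""
    assert len(data) == len(recon) == len(importance)
    weighted = sum(w * (x - y) ** 2 for x, y, w in zip(data, recon, importance))
    return weighted / sum(importance)

data = [1.0, 2.0, 3.0, 4.0]
recon = [1.1, 2.1, 3.5, 4.5]          # larger errors in the last two voxels...
importance = [1.0, 1.0, 0.1, 0.1]     # ...which the domain cares less about
loss = importance_weighted_mse(data, recon, importance)
plain_mse = sum((x - y) ** 2 for x, y in zip(data, recon)) / len(data)
```

Because the large errors fall in low-importance regions, the weighted loss is much smaller than the plain MSE, which is exactly the trade-off an importance-driven representation exploits.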
We propose VDL-Surrogate, a view-dependent-latent-based neural network surrogate model for parameter space exploration of ensemble simulations, which allows high-resolution visualizations and user-specified visual mappings. Surrogate-enabled parameter space exploration allows domain scientists to preview simulation results without running a large number of computationally costly simulations. Limited by computational resources, however, existing surrogate models may not produce previews at sufficient resolution for visualization and analysis. To improve the efficient utilization of computational resources and support high-resolution exploration, we perform ray casting from different viewpoints to collect samples and produce compact latent representations. This latent encoding process reduces the cost of surrogate model training while maintaining output quality. In the model training stage, we select viewpoints to cover the whole viewing sphere and train a corresponding VDL-Surrogate model for each selected viewpoint. In the model inference stage, we predict the latent representations at the previously selected viewpoints and decode the latent representations into the data space. For any given viewpoint, we interpolate over the decoded data at the selected viewpoints and generate visualizations with user-specified visual mappings. We show the effectiveness and efficiency of VDL-Surrogate on cosmological and ocean simulations with quantitative and qualitative evaluations. The source code is publicly available at https://github.com/trainsn/VDL-Surrogate.
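The inference-time step described above, blending decoded data from the selected viewpoints for an arbitrary query viewpoint, can be sketched as follows. The inverse-angular-distance weighting is an illustrative assumption; the paper interpolates decoded data at selected viewpoints, but its exact scheme may differ.

```python
import math

def interpolate_view(query, views, decoded, eps=1e-9):
    """Blend decoded data from selected viewpoints by inverse angular distance.

    query, views: unit direction vectors; decoded: one data list per viewpoint.
    """
    weights = []
    for v in views:
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(query, v))))
        weights.append(1.0 / (math.acos(dot) + eps))   # closer view -> larger weight
    total = sum(weights)
    weights = [w / total for w in weights]
    return [sum(w * d[i] for w, d in zip(weights, decoded))
            for i in range(len(decoded[0]))]

views = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # selected viewpoints on the sphere
decoded = [[1.0, 1.0], [3.0, 5.0]]           # decoded data per viewpoint
at_view = interpolate_view(views[0], views, decoded)     # query at a selected view
s = math.sqrt(0.5)
between = interpolate_view((s, s, 0.0), views, decoded)  # halfway between them
```

Querying exactly at a selected viewpoint returns (essentially) its own decoded data, while a viewpoint halfway between two selected views returns their average.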
In most of the literature on federated learning (FL), neural networks are initialized with random weights. In this paper, we present an empirical study of the effect of pre-training on FL. Specifically, we aim to investigate whether pre-training can alleviate the drastic accuracy drop when clients' decentralized data are non-IID. We focus on FedAvg, the fundamental and most widely used FL algorithm. We found that under non-IID data, pre-training does largely close the gap between FedAvg and centralized learning, but this does not come from alleviating the well-known model drift problem in FedAvg's local training. Instead, pre-training helps FedAvg by making its global aggregation more stable. When pre-training with real data is not feasible for FL, we propose a novel approach to pre-train with synthetic data. On various image datasets (including one for segmentation), our approach with synthetic pre-training leads to a notable gain, essentially a critical step toward scaling up federated learning for real-world applications.
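The FedAvg global aggregation this study builds on can be sketched in a few lines; the two-client numbers and the shared (possibly pre-trained) initialization below are illustrative toy values.

```python
def fedavg_aggregate(client_weights, client_sizes):
    """Server step of FedAvg: data-size-weighted average of client models."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

pretrained = [0.5, -0.2]   # shared initialization broadcast to all clients
# Pretend local training moved each client away from the shared start.
deltas = [[0.1, 0.1], [-0.1, -0.1]]
clients = [[p + d for p, d in zip(pretrained, delta)] for delta in deltas]
global_model = fedavg_aggregate(clients, [100, 300])   # client dataset sizes
```

The aggregated model leans toward the larger client (300 vs. 100 examples); the paper's observation is that a pre-trained shared start makes this averaging step markedly more stable under non-IID local updates.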
A large body of research exists on graph drawing, but many existing methods only focus on optimizing specific aesthetic aspects of a graph layout. Given a graph, generating a good layout that satisfies certain human aesthetic preferences remains a challenging task, especially if such preferences cannot be expressed as a differentiable objective function. In this paper, we propose SmartGD, a GAN-based graph drawing framework with a student-teacher architecture, which learns to draw graphs just like how humans learn to perform tasks. The student network in SmartGD learns graph drawing by imitating good layout examples, while the teacher network in SmartGD is responsible for providing ratings regarding the goodness of the generated layouts. When there is a lack of concrete aesthetic criteria to specify what constitutes a good layout, the student network can learn from the good layout examples. On the other hand, if the goodness of a layout can be assessed by quantitative criteria (even if they are not differentiable), the student network can use them as a concrete goal to optimize the target aesthetics. To accomplish this goal, we propose a novel GAN variant, the self-challenging GAN, to learn the optimal layout distribution with respect to any aesthetic criterion, whether the criterion is differentiable or not. The proposed graph drawing framework can thus not only draw graphs in a style similar to the good layout examples but also optimize graph layouts according to any given aesthetic criterion. Once the model is trained, it can visualize arbitrary graphs according to the style of the example layouts or the chosen aesthetic criterion. Comprehensive experimental studies show that SmartGD outperforms 12 benchmark methods according to commonly agreed metrics.
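As a concrete example of a quantitative but not-necessarily-differentiable aesthetic criterion the teacher could rate layouts with, here is the widely used stress metric, which compares Euclidean distances in the layout against graph-theoretic distances. This is an illustrative choice, not necessarily the exact criterion used in the paper.

```python
import math

def stress(positions, graph_dist):
    """Layout stress: sum over node pairs of (||xi - xj|| - dij)^2 / dij^2."""
    s = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            d = graph_dist[i][j]
            eu = math.dist(positions[i], positions[j])
            s += (eu - d) ** 2 / d ** 2   # weight w_ij = d_ij^-2
    return s

# Path graph 0-1-2: shortest-path distances between nodes.
gd = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
good = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]   # collinear, distance-preserving
bad = [(0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]    # all nodes collapsed
```

A distance-preserving layout scores zero stress, while the degenerate collapsed layout scores high; such a scalar rating works even though it is awkward to optimize directly by gradient descent.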
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
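The NAIVEATTACK variant can be sketched as below: triggers are stamped onto raw images and their labels flipped to the attacker's target class before distillation begins, leaving the distillation procedure itself untouched. Patch size, trigger value, and poisoning rate are illustrative assumptions.

```python
TRIGGER_VALUE = 1.0   # a white patch as the trigger

def add_trigger(image, patch=2):
    """Stamp a patch x patch trigger into the bottom-right corner of a 2D image."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]   # copy, so the clean data is untouched
    for r in range(h - patch, h):
        for c in range(w - patch, w):
            out[r][c] = TRIGGER_VALUE
    return out

def poison(dataset, target_label, rate=0.5):
    """Poison a fraction of (image, label) pairs before distillation starts."""
    k = int(len(dataset) * rate)
    poisoned = [(add_trigger(img), target_label) for img, lbl in dataset[:k]]
    return poisoned + dataset[k:]

# Four tiny all-black 4x4 "images" with labels 0..3; poison half toward class 9.
clean = [([[0.0] * 4 for _ in range(4)], lbl) for lbl in (0, 1, 2, 3)]
poisoned = poison(clean, target_label=9, rate=0.5)
```

DOORPING differs in that the trigger itself is iteratively re-optimized throughout the distillation procedure rather than fixed up front.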
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a limited number of support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes improves significantly over our strong baseline. Additionally, our new framework can easily be extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
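The first, feature-level enhancement step can be sketched as masked average pooling of support features into a dynamic class center, which then re-weights query features. The channel-wise multiplication below is an illustrative assumption about the form of the re-weighting.

```python
def class_center(support_feats, mask):
    """Masked average pooling: support_feats is a list of per-position C-dim
    feature vectors; mask holds 0/1 per position (1 = inside the support mask)."""
    C = len(support_feats[0])
    total = sum(mask)
    return [sum(f[c] * m for f, m in zip(support_feats, mask)) / total
            for c in range(C)]

def reweight(query_feat, center):
    # Channel-wise re-weighting of a query feature by the dynamic class center.
    return [q * c for q, c in zip(query_feat, center)]

feats = [[1.0, 0.0], [3.0, 2.0], [5.0, 4.0]]
mask = [1, 1, 0]                          # only the first two positions are object
center = class_center(feats, mask)        # -> [2.0, 1.0]
enhanced = reweight([0.5, 0.5], center)   # -> [1.0, 0.5]
```

Because the mask excludes background positions, the center summarizes only object evidence, which is what makes the resulting re-weighting class-specific.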
Nowadays, time-stamped web documents related to general news queries flood the Internet, and timeline summarization targets concisely summarizing the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose to extract the feature of event-level attention in its generation process with sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist extractive summarization, where the extracted summary likewise comes in chronological order. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
In this tutorial paper, we look into the evolution and prospect of network architecture and propose a novel conceptual architecture for the 6th generation (6G) networks. The proposed architecture has two key elements, i.e., holistic network virtualization and pervasive artificial intelligence (AI). The holistic network virtualization consists of network slicing and digital twin, from the aspects of service provision and service demand, respectively, to incorporate service-centric and user-centric networking. The pervasive network intelligence integrates AI into future networks from the perspectives of networking for AI and AI for networking, respectively. Building on holistic network virtualization and pervasive network intelligence, the proposed architecture can facilitate three types of interplay, i.e., the interplay between digital twin and network slicing paradigms, between model-driven and data-driven methods for network management, and between virtualization and AI, to maximize the flexibility, scalability, adaptivity, and intelligence for 6G networks. We also identify challenges and open issues related to the proposed architecture. By providing our vision, we aim to inspire further discussions and developments on the potential architecture of 6G.
In this paper, we investigate the joint device activity and data detection in massive machine-type communications (mMTC) with a one-phase non-coherent scheme, where data bits are embedded in the pilot sequences and the base station simultaneously detects active devices and their embedded data bits without explicit channel estimation. Due to the correlated sparsity pattern introduced by the non-coherent transmission scheme, the traditional approximate message passing (AMP) algorithm cannot achieve satisfactory performance. Therefore, we propose a deep learning (DL) modified AMP network (DL-mAMPnet) that enhances the detection performance by effectively exploiting the pilot activity correlation. The DL-mAMPnet is constructed by unfolding the AMP algorithm into a feedforward neural network, which combines the principled mathematical model of the AMP algorithm with the powerful learning capability, thereby benefiting from the advantages of both techniques. Trainable parameters are introduced in the DL-mAMPnet to approximate the correlated sparsity pattern and the large-scale fading coefficient. Moreover, a refinement module is designed to further advance the performance by utilizing the spatial feature caused by the correlated sparsity pattern. Simulation results demonstrate that the proposed DL-mAMPnet can significantly outperform traditional algorithms in terms of the symbol error rate performance.
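The unfolding idea, turning an iterative recovery algorithm into a feedforward network whose layers are iterations with trainable parameters, can be illustrated with a LISTA-style soft-thresholding sketch. This is an assumption for illustration, not DL-mAMPnet itself: AMP additionally carries an Onsager correction term, omitted here, and DL-mAMPnet further learns the correlated sparsity pattern and large-scale fading coefficients.

```python
# Recover a sparse vector from underdetermined measurements y = A x by
# iterative soft-thresholding; each loop iteration corresponds to one
# unfolded network "layer", with step and thresh the kind of parameters
# that become trainable after unfolding.
A = [[1.0, 0.0, 0.9],
     [0.0, 1.0, 0.9]]        # measurement matrix (2 observations, 3 unknowns)
y = [0.9, 0.9]               # observations of the sparse truth x* = [0, 0, 1]
step, thresh = 0.3, 0.015    # per-layer parameters (fixed by hand in this sketch)

def soft(v, t):
    """Soft-thresholding: shrink v toward zero by t."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

x = [0.0, 0.0, 0.0]
for _ in range(500):         # 500 unfolded "layers"
    Ax = [sum(A[r][c] * x[c] for c in range(3)) for r in range(2)]
    resid = [y[r] - Ax[r] for r in range(2)]
    grad = [sum(A[r][c] * resid[r] for r in range(2)) for c in range(3)]
    x = [soft(x[c] + step * grad[c], thresh) for c in range(3)]
```

The iteration drives the two spurious coordinates exactly to zero and concentrates the signal energy on the true support, which is the behavior each unfolded layer of such a network refines.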